
    Action-related determinants of spatial coding in perception and memory

    Cognitive representations of spatial layouts are known to be affected by spatial as well as nonspatial characteristics of the stimulus configuration. We review findings from our lab suggesting that at least part of the effects of nonspatial factors already originates in perception and, hence, reflects principles of perceptual rather than memory organization. Moreover, we present evidence that action-related factors can also affect the organization of spatial information in perception and memory. A theoretical account of these effects is proposed, which assumes that cognitive object representations integrate spatial and nonspatial stimulus information as well as information about object-related actions.

    Creative mood swings: divergent and convergent thinking affect mood in opposite ways

    Increasing evidence suggests that emotions affect cognitive processes. Recent approaches have also considered the opposite: that cognitive processes might affect people’s mood. Here we show that performing and, to a lesser degree, preparing for a creative thinking task induce systematic mood swings: Divergent thinking led to a more positive mood, whereas convergent thinking had the opposite effect. This pattern suggests that thought processes and mood are systematically related, but the type of relationship is process-specific.

    Action–effect learning in early childhood: does language matter?

    Previous work has shown that language has an important function in the development of action control. This study examined the role of verbal processes in action–effect learning in 4-year-old children. Participants performed an acquisition phase comprising a two-choice key-pressing task in which each key press (action) was followed by a particular sound (effect). Children were instructed to either (1) label their actions along with the corresponding effects, (2) verbalize task-irrelevant words, or (3) perform without verbalization. In a subsequent test phase, they responded to the same sound effects under either consistent or inconsistent sound–key mappings. Evidence for action–effect learning was obtained only if actions and effects were labeled or if no verbalization was performed, but not if children verbalized task-irrelevant labels. Importantly, action–effect learning was most pronounced when children verbalized the actions and the corresponding effects, suggesting that task-relevant verbal labeling supports the integration of event representations.

    Feature Integration Across the Lifespan: Stickier Stimulus–Response Bindings in Children and Older Adults

    Humans integrate the features of perceived events and of action plans into episodic event files. Here we investigated whether children (9–10 years), younger adults (20–31 years), and older adults (64–76 years) differ in the flexibility of managing (updating) event files. Relative to young adults, performance in children and older adults was more hampered by partial mismatches between present and previous stimulus–response relations, suggesting less efficient updating of episodic stimulus–response representations in childhood and old age. Results are discussed in relation to changes in cortical neurochemistry during maturation and senescence.

    The Roles of Action Selection and Actor Selection in Joint Task Settings

    Studies on joint task performance have proposed that co-acting individuals co-represent the shared task context, which implies that each actor integrates the co-actor’s task components into their own task representation as if these components were their own. This proposal has been supported by results of joint tasks in which each actor is assigned a single response, so that selecting a response is equivalent to selecting an actor. The present study used joint task switching, which has previously shown switch costs on trials following the actor’s own trial (intrapersonal switch costs) but not on trials following the co-actor’s trial (interpersonal switch costs), suggesting that there is no task co-representation. We examined whether interpersonal switch costs can be obtained when action selection and actor selection are confounded, as in previous joint task studies. The results confirmed this prediction, demonstrating that switch costs occur within a single actor as well as between co-actors when there is only a single response per actor, but not when there are two responses per actor. These results indicate that task co-representation is not necessarily implied even when effects occur across co-acting individuals, and that how the task is divided between co-actors plays an important role in determining whether such effects arise.

    Action-Effect Sharing Induces Task-Set Sharing in Joint Task Switching

    A central issue in the study of joint task performance has been whether co-acting individuals perform their partner’s part of the task as if it were their own. The present study addressed this issue by using joint task switching. A pair of actors shared two tasks that were presented in random order, whereby the relevant task and actor were cued on each trial. Responses produced action effects that were either shared or separate between co-actors. When co-actors produced separate action effects, switch costs were obtained within the same actor (i.e., when the same actor performed consecutive trials) but not between co-actors (when different actors performed consecutive trials), implying that actors did not perform their co-actor’s part. When action effects were shared between co-actors, however, switch costs were also obtained between co-actors, implying that actors did perform their co-actor’s part. The results indicate that shared action effects induce task-set sharing between co-acting individuals.

    Imaging when acting: picture but not word cues induce action-related biases of visual attention

    In line with the Theory of Event Coding (Hommel et al., 2001a), action planning has been shown to affect perceptual processing, an effect that has been attributed to a so-called intentional weighting mechanism (Wykowska et al., 2009; Memelink and Hommel, 2012), whose functional role is to provide information for open parameters of online action adjustment (Hommel, 2010). The aim of this study was to test whether different types of action representations induce intentional weighting to different degrees. To this end, we introduced a paradigm in which participants performed a visual search task while preparing to grasp or to point. The to-be-performed movement was signaled either by a picture of the required action or by a word cue. We reasoned that picture cues might trigger a more concrete action representation that would be more likely to activate the intentional weighting of perceptual dimensions that provide information for online action control. In contrast, word cues were expected to trigger a more abstract action representation that would be less likely to induce intentional weighting. In two experiments, preparing for an action facilitated the processing of targets in an unrelated search task if they differed from distractors on a dimension that provided information for online action control. As predicted, however, this effect was observed only if action preparation was signaled by picture cues, not if it was signaled by word cues. We conclude that picture cues are more efficient than word cues in activating the intentional weighting of perceptual dimensions, presumably by specifying not only invariant characteristics of the planned action but also the dimensions of action-specific parameters.